feat: single-sequencer P2P broadcast validation and L1 contract upgrade height#923
tomatoishealthy wants to merge 8 commits into main from
Conversation
- L1Sequencer.sol: sequencerHistory[], updateSequencer, getSequencerAt, initializeHistory
- Bindings: updated ABI for new contract interface
- SequencerVerifier: L1 history cache with interval cursor optimization
- Signer: simplified interface (removed IsActiveSequencer)
- 022-SequencerInit.ts: fixed initialize call (1 param instead of 2)
- Docker: added L1_SEQUENCER_CONTRACT env for all nodes

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- run-test.sh: added build-malicious and p2p-test commands
- docker-compose.override.yml: malicious-geth-0 and malicious-node-0 services
- Tests: T-01~T-05 (active attacks) + T-06 (BlockSync pollution) + T-07 (resilience)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Fix grep -c multiline: use || true instead of || echo "0"
- Fix env var loss: malicious override must include full env list
- Swap approach: reuse synced sentry instead of fresh malicious container
- Uncomment CONSENSUS_SWITCH_HEIGHT for V2 mode activation
- Add SEQUENCER_PRIVATE_KEY to node-0 override

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Use staking key (0xd998...) as SEQUENCER_PRIVATE_KEY for node-0
- Add initializeHistory() call in setup to register sequencer on L1
- Fixes "no sequencer record" error in V2 mode

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- T-06: use blocksync-forge (blocksync/reactor.go) instead of sync-forge (broadcast_reactor.go): targets the actual V1 vulnerability path
- T-06: stop node-3 to create gap, restart to trigger BlockSync
- Phase 0: explicit checks for V2 mode, signer, and switch height
- T-04: use futureHeight (currentHeight+10000) for deterministic unsolicited test
- Separate log files per phase to prevent cross-contamination

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Add L1Sequencer.t.sol: 27 Foundry tests covering initialize, initializeHistory, updateSequencer, getSequencerAt binary search edge cases, and access control
- Regenerate l1sequencer.go with abigen (bytecode now matches current contract with sequencerHistory[], binary search, etc.)
- Update verifier.go: L1SequencerHistoryRecord -> L1SequencerSequencerRecord
- Add exponential backoff retry (10s -> 20s -> ... -> 5min) when initial history load fails, instead of waiting the full 5 minutes

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Avoids stuttering in abigen output (L1SequencerSequencerRecord -> L1SequencerHistoryRecord). No ABI/storage layout change.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…om L1 contract

Unify the upgrade height source: instead of a CLI flag / env var, the verifier now sets upgrade.UpgradeBlockHeight from the first history record fetched from the L1Sequencer contract.

- node/l1sequencer/verifier.go: call SetUpgradeBlockHeight on first successful history load (prev==0)
- node/cmd/node/main.go: remove ConsensusSwitchHeight flag read; require L1 Sequencer contract address
- node/flags/flags.go: delete ConsensusSwitchHeight flag definition
- docker-compose.override.yml: remove 5× MORPH_NODE_CONSENSUS_SWITCH_HEIGHT
- run-test.sh: remove set_upgrade_height function, add wait_for_l1_finalized to ensure L1 contract data is finalized before L2 nodes start

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
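The "set the upgrade height once, on first successful load" behavior this commit describes can be sketched with a compare-and-swap. `upgradeHeight` and `onHistorySynced` here are hypothetical stand-ins for illustration, not the repository's identifiers:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// upgradeHeight stands in for the global the verifier sets via
// upgrade.SetUpgradeBlockHeight; the zero value means "not set yet".
var upgradeHeight atomic.Uint64

// onHistorySynced records the upgrade height from the first history record,
// but only on the first successful load (the prev==0 guard described above).
func onHistorySynced(firstRecordStart uint64) {
	upgradeHeight.CompareAndSwap(0, firstRecordStart)
}

func main() {
	onHistorySynced(10) // first successful sync wins
	onHistorySynced(99) // later refreshes must not move the height
	fmt.Println(upgradeHeight.Load())
}
```

The compare-and-swap makes repeated refresh-loop calls harmless: only the first observed record can establish the switch height.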
📝 Walkthrough

Replaces single-address sequencer tracking with height-indexed history records. The L1Sequencer contract now maintains sequencer history keyed by L2 block heights, enabling temporal lookups. The node verifier syncs this history periodically and validates sequencers at specific L2 heights. Tests validate initialization, updates, and binary-search-based history queries.

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    participant Node
    participant Verifier
    participant L1 as L1 Chain
    participant Cache as History Cache
    rect rgba(0, 100, 200, 0.5)
        Note over Node,Cache: Background Sync Loop (Periodic)
        Node->>Verifier: refreshLoop() wakes up
        Verifier->>L1: syncHistory(finalizedBlock)
        L1-->>Verifier: sequencerHistory[] + heights
        Verifier->>Cache: store history records
        Verifier->>Cache: update interval cursor
    end
    rect rgba(150, 100, 0, 0.5)
        Note over Node,Cache: Query for L2 Height
        Node->>Verifier: SequencerAtHeight(l2Height)
        Verifier->>Cache: binary search over startL2Block
        Verifier->>Cache: use cached interval cursor
        Cache-->>Verifier: sequencer address
        Verifier-->>Node: address at l2Height
    end
    rect rgba(0, 150, 100, 0.5)
        Note over Node,Cache: Contract Updates (Admin)
        Node->>L1: updateSequencer(newSeq, startL2Block)
        L1->>L1: append to sequencerHistory[]
        L1-->>Node: SequencerUpdated event
        Note over Verifier: Next sync picks up new record
    end
```
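The query path in the middle panel (binary search over `startL2Block`) can be sketched as follows. This is an illustrative Go snippet, not the repository's verifier code, which additionally keeps an interval cursor so consecutive queries for nearby heights can skip the search:

```go
package main

import (
	"fmt"
	"sort"
)

// historyRecord loosely mirrors an entry of the contract's sequencerHistory[]:
// which sequencer is active from which L2 block height onward.
type historyRecord struct {
	startL2Block uint64
	sequencer    string
}

// sequencerAt finds the record governing height h: the last record whose
// startL2Block <= h. History is assumed sorted by startL2Block ascending.
func sequencerAt(history []historyRecord, h uint64) (string, bool) {
	// Index of the first record strictly after h.
	i := sort.Search(len(history), func(i int) bool {
		return history[i].startL2Block > h
	})
	if i == 0 {
		return "", false // h predates the first record
	}
	return history[i-1].sequencer, true
}

func main() {
	hist := []historyRecord{{10, "seqA"}, {100, "seqB"}, {250, "seqC"}}
	for _, h := range []uint64{9, 10, 99, 250, 1000} {
		s, ok := sequencerAt(hist, h)
		fmt.Println(h, s, ok)
	}
}
```

Note the boundary semantics: a record's sequencer is valid from `startL2Block` inclusive until the next record's `startL2Block` exclusive, and the last record stays valid indefinitely.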
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~35 minutes

Suggested reviewers
🚥 Pre-merge checks: ❌ 1 failed (1 warning) | ✅ 2 passed
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
ops/docker-sequencer-test/run-test.sh (1)
53-60: ⚠️ Potential issue | 🟠 Major
`get_block_number()` must always return a number. Changing the fallback to `|| true` lets this function return an empty string. The later arithmetic expansions/comparisons then fail under `set -e` during transient RPC gaps.

Suggested fix
```diff
 get_block_number() {
     local rpc_url="${1:-$L2_RPC}"
     local result
     result=$(curl -s -X POST -H "Content-Type: application/json" \
         --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
         "$rpc_url" 2>/dev/null)
-    echo "$result" | grep -o '"result":"[^"]*"' | cut -d'"' -f4 | xargs printf "%d" 2>/dev/null || true
+    local block
+    block=$(echo "$result" | grep -o '"result":"[^"]*"' | cut -d'"' -f4 | xargs printf "%d" 2>/dev/null) || true
+    echo "${block:-0}"
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ops/docker-sequencer-test/run-test.sh` around lines 53-60: the get_block_number() function can return an empty string because of the trailing "|| true", causing failures; update get_block_number to always output a numeric value by removing the "|| true" fallback and instead detecting an empty/failed parse: capture the curl output into result, extract the hex block number (the current grep/cut pipeline), and if that extracted value is empty print 0 (or another safe numeric default), otherwise convert the hex to decimal with printf "%d" and echo it; reference the get_block_number function and its use of curl/grep/cut/printf when making this change.

ops/docker/docker-compose-4nodes.yml (1)
263-270: ⚠️ Potential issue | 🟠 Major

These env blocks drop required syncer inputs.

`node-1` no longer passes `MORPH_NODE_SYNC_DEPOSIT_CONTRACT_ADDRESS`, and `sentry-node-0` now drops both `MORPH_NODE_SYNC_DEPOSIT_CONTRACT_ADDRESS` and `MORPH_NODE_SYNC_START_HEIGHT`. When those envs are absent, `sync.Config` leaves the values at nil/0 defaults, and the syncer rejects that configuration.

Also applies to: 379-382
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ops/docker/docker-compose-4nodes.yml` around lines 263 - 270, The docker-compose env blocks removed required syncer variables causing sync.Config to get nil/zero defaults; restore MORPH_NODE_SYNC_DEPOSIT_CONTRACT_ADDRESS and MORPH_NODE_SYNC_START_HEIGHT into the affected service environment blocks (e.g., node-1 and sentry-node-0) so the syncer receives explicit values (use the existing pattern: set MORPH_NODE_SYNC_DEPOSIT_CONTRACT_ADDRESS=${MORPH_PORTAL:-0x6900000000000000000000000000000000000001} and MORPH_NODE_SYNC_START_HEIGHT=${MORPH_NODE_SYNC_START_HEIGHT:-1} or appropriate defaults), ensuring sync.Config is populated and the syncer accepts the configuration.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@contracts/deploy/022-SequencerInit.ts`:
- Around line 38-42: The deployment script upgrades the sequencer proxy and
calls initialize(owner) via IL1SequencerProxy.upgradeToAndCall and
L1SequencerFactory.interface.encodeFunctionData('initialize', [owner]) but never
calls initializeHistory(), so production nodes may start with an empty verifier
cache and missing sequencer history; either add an explicit finalize/bootstrap
step in the deployment pipeline (contracts/tasks/deploy.ts) that invokes the
sequencer contract's initializeHistory() (or a wrapper task) after the proxy
upgrade, or document and implement an operational post-deploy script/step that
calls initializeHistory() before any node starts so history and upgrade height
are seeded.
In `@node/cmd/node/main.go`:
- Around line 243-253: initL1SequencerComponents currently treats a missing
contractAddr as a hard error and returns an error, which breaks default stacks
that don't set MORPH_NODE_L1_SEQUENCER_CONTRACT; change this to be optional: in
initL1SequencerComponents (the contractAddr check / verifier initialization
block) instead of returning fmt.Errorf when contractAddr == (common.Address{}),
log an info/warn and leave verifier nil so the node can start without an L1
sequencer, or alternatively update deployment configs to export
MORPH_NODE_L1_SEQUENCER_CONTRACT; pick one approach and make it consistent
(adjust the contractAddr check, the error return, and any callers expecting an
error) and ensure logger.Info/Warning references contractAddr and that callers
handle a nil *l1sequencer.SequencerVerifier.
In `@node/l1sequencer/verifier.go`:
- Around line 49-60: NewSequencerVerifier currently swallows syncHistory()
errors and allows the node to boot with an empty sequencer history
(VerificationStartHeight()==math.MaxUint64), so make NewSequencerVerifier return
an error when syncHistory() fails and propagate that error up to the caller in
node/cmd/node/main.go; specifically, change NewSequencerVerifier(caller
*bindings.L1SequencerCaller, logger tmlog.Logger) *SequencerVerifier to return
(*SequencerVerifier, error), call v.syncHistory() and if it fails return nil,
err (instead of logging), update callers in node/cmd/node/main.go to handle the
error and abort startup rather than continuing, and ensure existing uses of
refreshLoop(ctx) and cancel are preserved on successful creation.
In `@ops/docker-sequencer-test/run-test.sh`:
- Around line 207-225: The initializeHistory call on l1_sequencer_addr should
not swallow all exceptions; update the try/except around run_command for
initializeHistory(address,uint64) so that only the specific “already
initialized” error is ignored and all other failures are surfaced. Locate the
block using l1_sequencer_addr, sequencer_addr, deployer_pk and run_command and
change the except to inspect the exception (or run_command result) for the known
re-init/duplicate-initialization message and log a benign message then continue;
for any other exception, re-raise or log as an error and exit non-zero so real
deployment failures fail fast. Ensure the log messages still include the
exception text when surfacing real errors.
- Around line 273-301: The wait_for_l1_finalized function currently continues on
timeout which reintroduces the startup race; modify it so that when the loop
times out it logs an error and exits non‑zero instead of logging a warning and
continuing. Specifically, in wait_for_l1_finalized (the function that polls
eth_getBlockByNumber "finalized"), replace the final log_warn "Timeout..."
branch with a log_error (or log_fatal) that includes the min_block and l1_rpc
context and then call exit 1 so the script fails fast when L1 finality never
reaches the required height.
---
Outside diff comments:
In `@ops/docker-sequencer-test/run-test.sh`:
- Around line 53-60: The get_block_number() function can return an empty string
because of the trailing "|| true", causing failures; update get_block_number to
always output a numeric value by removing the "|| true" fallback and instead
detect an empty/failed parse: capture the curl output into result, extract the
hex block number (the current grep/cut pipeline), and if that extracted value is
empty print 0 (or another safe numeric default), otherwise convert the hex to
decimal with printf "%d" and echo it; reference the get_block_number function
and its use of curl/grep/cut/printf when making this change.
In `@ops/docker/docker-compose-4nodes.yml`:
- Around line 263-270: The docker-compose env blocks removed required syncer
variables causing sync.Config to get nil/zero defaults; restore
MORPH_NODE_SYNC_DEPOSIT_CONTRACT_ADDRESS and MORPH_NODE_SYNC_START_HEIGHT into
the affected service environment blocks (e.g., node-1 and sentry-node-0) so the
syncer receives explicit values (use the existing pattern: set
MORPH_NODE_SYNC_DEPOSIT_CONTRACT_ADDRESS=${MORPH_PORTAL:-0x6900000000000000000000000000000000000001}
and MORPH_NODE_SYNC_START_HEIGHT=${MORPH_NODE_SYNC_START_HEIGHT:-1} or
appropriate defaults), ensuring sync.Config is populated and the syncer accepts
the configuration.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: a501d106-86b3-4a2b-a970-429276f172a2
📒 Files selected for processing (12)
- bindings/bindings/l1sequencer.go
- contracts/contracts/l1/L1Sequencer.sol
- contracts/contracts/test/L1Sequencer.t.sol
- contracts/contracts/test/base/L1SequencerBase.t.sol
- contracts/deploy/022-SequencerInit.ts
- node/cmd/node/main.go
- node/flags/flags.go
- node/l1sequencer/signer.go
- node/l1sequencer/verifier.go
- ops/docker-sequencer-test/docker-compose.override.yml
- ops/docker-sequencer-test/run-test.sh
- ops/docker/docker-compose-4nodes.yml
💤 Files with no reviewable changes (1)
- node/flags/flags.go
```diff
 // Upgrade and initialize the proxy with owner only.
 // Sequencer history is initialized separately via initializeHistory().
 await IL1SequencerProxy.upgradeToAndCall(
   L1SequencerImplAddress,
-  L1SequencerFactory.interface.encodeFunctionData('initialize', [owner, initialSequencer])
+  L1SequencerFactory.interface.encodeFunctionData('initialize', [owner])
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Expect at least one deployment/bootstrap hit outside tests.
rg -n -C2 'initializeHistory\s*\('

Repository: morph-l2/morph
Length of output: 6512
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Search for deploy scripts and any initialization orchestration
echo "=== Deploy scripts and bootstrap patterns ==="
find . -type f \( -name "*.ts" -o -name "*.js" -o -name "*.sh" \) -path "*/deploy*" -o -path "*/scripts*" | head -20
echo ""
echo "=== Checking for other deploy/init patterns ==="
rg -l 'upgrade.*Sequencer|SequencerInit|initialize.*History' --type ts --type js --type sh 2>/dev/null | grep -v test | grep -v node_modules | head -20
echo ""
echo "=== Checking contract deployment documentation ==="
find . -maxdepth 3 -type f \( -name "README*" -o -name "DEPLOY*" -o -name "*.md" \) 2>/dev/null | head -10

Repository: morph-l2/morph
Length of output: 1371
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== contracts/deploy/index.ts ==="
cat contracts/deploy/index.ts
echo ""
echo "=== contracts/tasks/deploy.ts (first 100 lines) ==="
head -100 contracts/tasks/deploy.ts
echo ""
echo "=== Check if there's any post-022 initialization logic ==="
rg -A5 -B5 '022-SequencerInit' contracts/deploy/index.ts contracts/tasks/deploy.ts 2>/dev/null || echo "No matches found"

Repository: morph-l2/morph
Length of output: 5911
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== contracts/tasks/deploy.ts (lines 80-200) ==="
sed -n '80,200p' contracts/tasks/deploy.ts
echo ""
echo "=== contracts/tasks/deploy.ts (lines 200-end) ==="
tail -100 contracts/tasks/deploy.ts

Repository: morph-l2/morph
Length of output: 9653
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== contracts/deploy/020-ContractInit.ts ==="
cat contracts/deploy/020-ContractInit.ts

Repository: morph-l2/morph
Length of output: 6056
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== Search for initializeHistory in operations/deployment docs ==="
find . -type f \( -name "*.md" -o -name "*.txt" -o -name "README*" -o -name "DEPLOY*" -o -name "*init*" \) ! -path "*/node_modules/*" ! -path "*/.git/*" | xargs grep -l "initializeHistory\|sequencer.*history\|initialize.*history" 2>/dev/null | head -10
echo ""
echo "=== Check ops directory for sequencer initialization ==="
find ops -type f \( -name "*.sh" -o -name "*.md" \) 2>/dev/null | head -20
echo ""
echo "=== Check if there's a separate sequencer bootstrap/initialization script ==="
find . -type f \( -name "*sequencer*init*" -o -name "*bootstrap*sequencer*" \) ! -path "*/node_modules/*" ! -path "*/.git/*" 2>/dev/null | head -10

Repository: morph-l2/morph
Length of output: 126
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== contracts/deploy/022-SequencerInit.ts (full file) ==="
cat contracts/deploy/022-SequencerInit.ts
echo ""
echo "=== contracts/contracts/l1/L1Sequencer.sol (initialize function) ==="
sed -n '45,70p' contracts/contracts/l1/L1Sequencer.sol

Repository: morph-l2/morph
Length of output: 3752
Verify if a production bootstrap step calls initializeHistory().
This script upgrades the proxy and calls initialize(owner) only. However, the production deployment pipeline (contracts/tasks/deploy.ts) has no step that calls initializeHistory() to seed the sequencer history. Only test code and the docker test script call it. Without a production bootstrap step, nodes will boot with an empty verifier cache, no upgrade height, and no sequencer history records—leaving the system in an inconsistent state. Confirm that a separate operational step (manual or scripted) handles history initialization before node startup, or add it as a final deployment step.
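One way the suggested bootstrap step can stay safe to re-run is to seed the history only when it is still empty. Below is an illustrative, idempotent sketch against a hypothetical contract interface; the real abigen bindings expose different signatures, so treat every name here as an assumption:

```go
package main

import "fmt"

// seqContract abstracts the two calls a post-deploy bootstrap step would need.
// Hypothetical interface for illustration only.
type seqContract interface {
	HistoryLength() (uint64, error)
	InitializeHistory(sequencer string, startL2Block uint64) error
}

// ensureHistorySeeded makes the bootstrap idempotent: it seeds the first
// record only when the on-chain history is empty, so re-running the
// deployment pipeline is harmless.
func ensureHistorySeeded(c seqContract, sequencer string, start uint64) error {
	n, err := c.HistoryLength()
	if err != nil {
		return err
	}
	if n > 0 {
		return nil // already initialized, nothing to do
	}
	return c.InitializeHistory(sequencer, start)
}

// fakeContract is an in-memory stub used to demonstrate the flow.
type fakeContract struct{ records uint64 }

func (f *fakeContract) HistoryLength() (uint64, error) { return f.records, nil }
func (f *fakeContract) InitializeHistory(string, uint64) error {
	f.records++
	return nil
}

func main() {
	c := &fakeContract{}
	_ = ensureHistorySeeded(c, "0xseq", 10)
	_ = ensureHistorySeeded(c, "0xseq", 10) // second run is a no-op
	fmt.Println(c.records)
}
```

Checking state before writing also sidesteps the "ignore the already-initialized revert" problem flagged later in this review.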
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@contracts/deploy/022-SequencerInit.ts` around lines 38 - 42, The deployment
script upgrades the sequencer proxy and calls initialize(owner) via
IL1SequencerProxy.upgradeToAndCall and
L1SequencerFactory.interface.encodeFunctionData('initialize', [owner]) but never
calls initializeHistory(), so production nodes may start with an empty verifier
cache and missing sequencer history; either add an explicit finalize/bootstrap
step in the deployment pipeline (contracts/tasks/deploy.ts) that invokes the
sequencer contract's initializeHistory() (or a wrapper task) after the proxy
upgrade, or document and implement an operational post-deploy script/step that
calls initializeHistory() before any node starts so history and upgrade height
are seeded.
```diff
 // Initialize Sequencer Verifier
 var verifier *l1sequencer.SequencerVerifier
 if contractAddr != (common.Address{}) {
 	caller, err := bindings.NewL1SequencerCaller(contractAddr, l1Client)
 	if err != nil {
 		tracker.Stop()
 		return nil, nil, nil, fmt.Errorf("failed to create L1Sequencer caller: %w", err)
 	}
 	verifier = l1sequencer.NewSequencerVerifier(caller, logger)
 	logger.Info("Sequencer verifier initialized", "contract", contractAddr.Hex())
 } else {
-	logger.Info("L1 Sequencer contract not configured, verifier disabled")
+	return nil, nil, nil, fmt.Errorf("L1 Sequencer contract address is required, check l1.sequencerContract configuration")
```
This new hard requirement needs matching compose updates.
initL1SequencerComponents now returns an error when the sequencer contract address is unset, but the default 4-node compose file still doesn't export MORPH_NODE_L1_SEQUENCER_CONTRACT for the standard node services. The stock stack will fail before Tendermint starts.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@node/cmd/node/main.go` around lines 243 - 253, initL1SequencerComponents
currently treats a missing contractAddr as a hard error and returns an error,
which breaks default stacks that don't set MORPH_NODE_L1_SEQUENCER_CONTRACT;
change this to be optional: in initL1SequencerComponents (the contractAddr check
/ verifier initialization block) instead of returning fmt.Errorf when
contractAddr == (common.Address{}), log an info/warn and leave verifier nil so
the node can start without an L1 sequencer, or alternatively update deployment
configs to export MORPH_NODE_L1_SEQUENCER_CONTRACT; pick one approach and make
it consistent (adjust the contractAddr check, the error return, and any callers
expecting an error) and ensure logger.Info/Warning references contractAddr and
that callers handle a nil *l1sequencer.SequencerVerifier.
```diff
 func NewSequencerVerifier(caller *bindings.L1SequencerCaller, logger tmlog.Logger) *SequencerVerifier {
-	return &SequencerVerifier{
+	ctx, cancel := context.WithCancel(context.Background())
+	v := &SequencerVerifier{
 		caller: caller,
 		logger: logger.With("module", "l1sequencer_verifier"),
+		cancel: cancel,
 	}
+	if err := v.syncHistory(); err != nil {
+		v.logger.Error("Failed to load sequencer history from L1", "err", err)
+	}
+	go v.refreshLoop(ctx)
+	return v
```
Don't boot with an empty sequencer history.
If `syncHistory()` fails here, the node still starts with `VerificationStartHeight() == math.MaxUint64` and never calls `upgrade.SetUpgradeBlockHeight` until a later refresh succeeds. A transient L1/finalization issue at boot can therefore keep the node on the pre-upgrade path past the real activation height.
Suggested direction

```diff
-func NewSequencerVerifier(caller *bindings.L1SequencerCaller, logger tmlog.Logger) *SequencerVerifier {
+func NewSequencerVerifier(caller *bindings.L1SequencerCaller, logger tmlog.Logger) (*SequencerVerifier, error) {
 	ctx, cancel := context.WithCancel(context.Background())
 	v := &SequencerVerifier{
 		caller: caller,
 		logger: logger.With("module", "l1sequencer_verifier"),
 		cancel: cancel,
 	}
 	if err := v.syncHistory(); err != nil {
-		v.logger.Error("Failed to load sequencer history from L1", "err", err)
+		cancel()
+		return nil, fmt.Errorf("initial sequencer history sync failed: %w", err)
 	}
 	go v.refreshLoop(ctx)
-	return v
+	return v, nil
 }
```

A caller update is needed in node/cmd/node/main.go to propagate the error.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@node/l1sequencer/verifier.go` around lines 49 - 60, NewSequencerVerifier
currently swallows syncHistory() errors and allows the node to boot with an
empty sequencer history (VerificationStartHeight()==math.MaxUint64), so make
NewSequencerVerifier return an error when syncHistory() fails and propagate that
error up to the caller in node/cmd/node/main.go; specifically, change
NewSequencerVerifier(caller *bindings.L1SequencerCaller, logger tmlog.Logger)
*SequencerVerifier to return (*SequencerVerifier, error), call v.syncHistory()
and if it fails return nil, err (instead of logging), update callers in
node/cmd/node/main.go to handle the error and abort startup rather than
continuing, and ensure existing uses of refreshLoop(ctx) and cancel are
preserved on successful creation.
```python
# Initialize L1Sequencer history for V2 mode
# Register the first sequencer (node-0's staking address) at upgrade height
l1_sequencer_addr = addresses.get('Proxy__L1Sequencer', '')
if l1_sequencer_addr:
    upgrade_height = os.environ.get('UPGRADE_HEIGHT', '10')
    sequencer_addr = deploy_config['l2StakingAddresses'][0]  # node-0's address
    deployer_pk = '0xac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80'
    log.info(f'Initializing L1Sequencer history: sequencer={sequencer_addr}, startL2Block={upgrade_height}')
    try:
        run_command([
            'cast', 'send', l1_sequencer_addr,
            'initializeHistory(address,uint64)',
            sequencer_addr, str(upgrade_height),
            '--rpc-url', 'http://127.0.0.1:9545',
            '--private-key', deployer_pk
        ])
        log.info('L1Sequencer history initialized successfully')
    except Exception as e:
        log.info(f'L1Sequencer initializeHistory failed (may already be initialized): {e}')
```
Don't treat every initializeHistory failure as "already initialized".
This call now seeds both sequencer authorization and upgrade height. Swallowing all exceptions here turns real deployment failures into late, confusing test failures; only the specific expected reinit error should be ignored.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@ops/docker-sequencer-test/run-test.sh` around lines 207 - 225, The
initializeHistory call on l1_sequencer_addr should not swallow all exceptions;
update the try/except around run_command for initializeHistory(address,uint64)
so that only the specific “already initialized” error is ignored and all other
failures are surfaced. Locate the block using l1_sequencer_addr, sequencer_addr,
deployer_pk and run_command and change the except to inspect the exception (or
run_command result) for the known re-init/duplicate-initialization message and
log a benign message then continue; for any other exception, re-raise or log as
an error and exit non-zero so real deployment failures fail fast. Ensure the log
messages still include the exception text when surfacing real errors.
```shell
# Wait for L1 finalized block to reach at least the given height.
# This ensures contract data (e.g., initializeHistory) is visible via
# the finalized block tag when L2 nodes start their verifier sync.
wait_for_l1_finalized() {
    local min_block=${1:-1}
    local l1_rpc="${2:-http://127.0.0.1:9545}"
    local max_wait=120
    local waited=0

    log_info "Waiting for L1 finalized block >= $min_block..."
    while [ $waited -lt $max_wait ]; do
        local fin
        fin=$(curl -s -X POST -H "Content-Type: application/json" \
            --data '{"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["finalized",false],"id":1}' \
            "$l1_rpc" 2>/dev/null | grep -o '"number":"0x[^"]*"' | head -1 | cut -d'"' -f4)
        if [ -n "$fin" ]; then
            local fin_dec
            fin_dec=$(printf "%d" "$fin" 2>/dev/null || echo 0)
            if [ "$fin_dec" -ge "$min_block" ]; then
                log_success "L1 finalized block: $fin_dec (>= $min_block)"
                return 0
            fi
            echo -ne "\r  L1 finalized: $fin_dec / $min_block"
        fi
        sleep 3
        waited=$((waited + 3))
    done
    log_warn "Timeout waiting for L1 finalized >= $min_block (continuing anyway)"
}
```
Fail fast when L1 finality never catches up.
The verifier reads initializeHistory via the finalized tag. Returning success after the timeout reintroduces the exact startup race this helper was added to avoid.
Suggested fix

```diff
-    log_warn "Timeout waiting for L1 finalized >= $min_block (continuing anyway)"
+    log_error "Timeout waiting for L1 finalized >= $min_block"
+    return 1
 }
```
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@ops/docker-sequencer-test/run-test.sh` around lines 273 - 301, The
wait_for_l1_finalized function currently continues on timeout which reintroduces
the startup race; modify it so that when the loop times out it logs an error and
exits non‑zero instead of logging a warning and continuing. Specifically, in
wait_for_l1_finalized (the function that polls eth_getBlockByNumber
"finalized"), replace the final log_warn "Timeout..." branch with a log_error
(or log_fatal) that includes the min_block and l1_rpc context and then call exit
1 so the script fails fast when L1 finality never reaches the required height.
Summary

- `HistoryRecord` struct and `initializeHistory`/`updateSequencer`/`getSequencerHistory`/`getSequencerAt` interfaces for on-chain sequencer history management, with full unit tests and regenerated Go bindings
- `IsSequencerAt`/`VerificationStartHeight` verifier interfaces; background refresh loop with exponential backoff
- Removed `--consensus.switchHeight` CLI flag and `MORPH_NODE_CONSENSUS_SWITCH_HEIGHT` env var; the verifier now calls `upgrade.SetUpgradeBlockHeight` on first successful history load, making the L1 contract the single source of truth
- `run-test.sh` gains `build-malicious`/`p2p-test` commands; `wait_for_l1_finalized` ensures L1 finality before L2 nodes start so the verifier can read history at construction time

Test plan

- `go build ./node/...` compiles cleanly
- `height=10` at startup

🤖 Generated with Claude Code
Summary by CodeRabbit
Release Notes
New Features
Refactor
Tests
Chores